
    Phase transitions of Integrated Information in the Generalized Ising Model of the Brain

    This thesis explores the bold framework of the Integrated Information Theory of consciousness in the context of the generalized Ising model of the brain. Small 5-node networks are simulated with the Ising model using Metropolis transitions, with the temperature parameter T fitted to empirical functional connectivity matrices of healthy human subjects. When fitted to criticality, the results indicate that integrated information undergoes a phase transition at the critical temperature Tc. The results are interpreted in the context of an emerging perspective in the science of complexity, and perhaps even the philosophy of science: the universe as a self-organizing critical system undergoing cascades of phase transitions into complexity.
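    As a minimal sketch of the kind of simulation described above: the snippet below samples a small Ising system with Metropolis transitions and scans the temperature T against an empirical functional connectivity matrix. The coupling matrix J, the sampling schedule, and the least-squares fitting criterion are illustrative assumptions, not the thesis code.

```python
import numpy as np

def metropolis_ising(J, T, n_steps=100_000, rng=None):
    """Sample spin configurations of a small Ising system with couplings J
    (assumed symmetric, zero diagonal) at temperature T via Metropolis transitions."""
    rng = rng or np.random.default_rng()
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)              # random initial spins
    samples = []
    for step in range(n_steps):
        i = rng.integers(n)                      # propose flipping one node
        dE = 2 * s[i] * (J[i] @ s)               # energy change of the flip
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]                         # Metropolis accept
        if step % 100 == 0:
            samples.append(s.copy())
    return np.array(samples)

def fit_temperature(J, fc_empirical, temps):
    """Pick the T whose simulated correlations best match the empirical
    functional connectivity matrix (least-squares criterion, illustrative)."""
    best_T, best_err = None, np.inf
    for T in temps:
        fc_sim = np.corrcoef(metropolis_ising(J, T).T)   # simulated "FC"
        err = np.linalg.norm(fc_sim - fc_empirical)
        if err < best_err:
            best_T, best_err = T, err
    return best_T
```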

    When to be critical? Performance and evolvability in different regimes of neural Ising agents

    It has long been hypothesized that operating close to the critical state is beneficial for natural and artificial systems, as well as for their evolution. We put this hypothesis to the test in a system of evolving foraging agents controlled by neural networks that can adapt the agents' dynamical regime throughout evolution. Surprisingly, we find that all populations that discover solutions evolve to be subcritical. Through a resilience analysis, we find that there are still benefits to starting the evolution in the critical regime: initially critical agents maintain their fitness level under environmental changes (for example, in the lifespan) and degrade gracefully when their genome is perturbed. At the same time, initially subcritical agents, even when evolved to the same fitness, often fail to withstand changes in the lifespan and degrade catastrophically under genetic perturbations. Furthermore, we find that the optimal distance to criticality depends on task complexity. To test this, we introduce a hard and a simple task: for the hard task, agents evolve closer to criticality, whereas more subcritical solutions are found for the simple task. We verify that our results are independent of the selected evolutionary mechanisms by testing them with two fundamentally different approaches: a genetic algorithm and an evolution strategy. In summary, our study suggests that although optimal behaviour in the simple task is obtained in a subcritical regime, initializing near criticality is important for efficiently finding optimal solutions to new tasks of unknown complexity. (Comment: arXiv admin note: substantial text overlap with arXiv:2103.1218)
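    The agents' dynamical regime is the central control variable here. One common, simple proxy for the regime of a recurrent controller, used below purely as an illustrative stand-in for the paper's own measure, is the spectral radius of the recurrent weight matrix, which can be rescaled to initialize a population at a chosen distance from criticality:

```python
import numpy as np

def spectral_radius(W):
    """Largest |eigenvalue| of the recurrent weight matrix: below 1 the
    linearized dynamics are subcritical, near 1 they are roughly critical."""
    return np.max(np.abs(np.linalg.eigvals(W)))

def set_regime(W, rho_target):
    """Rescale W so the agent starts at a chosen distance from the
    edge of instability (rho_target = 1.0 ~ critical)."""
    return W * (rho_target / spectral_radius(W))

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 50)) / np.sqrt(50)
W_subcritical = set_regime(W, 0.7)   # strongly damped initialization
W_critical = set_regime(W, 1.0)      # initialization near criticality
```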

    Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks

    Recurrent neural networks (RNNs) in the brain and in silico excel at solving tasks with intricate temporal dependencies. The long timescales required for solving such tasks can arise from properties of individual neurons (the single-neuron timescale τ, e.g., the membrane time constant in biological neurons) or from recurrent interactions among them (the network-mediated timescale). However, the contribution of each mechanism to optimally solving memory-dependent tasks remains poorly understood. Here, we train RNNs to solve N-parity and N-delayed match-to-sample tasks with increasing memory requirements controlled by N, simultaneously optimizing the recurrent weights and the τs. We find that for both tasks RNNs develop longer timescales with increasing N but, depending on the learning objective, use different mechanisms. Two distinct curricula define the learning objectives: sequential learning of a single N (single-head) or simultaneous learning of multiple Ns (multi-head). Single-head networks increase their τ with N and are able to solve tasks for large N, but they suffer from catastrophic forgetting. In contrast, multi-head networks, which are explicitly required to hold multiple concurrent memories, keep τ constant and develop longer timescales through recurrent connectivity. Moreover, we show that the multi-head curriculum increases training speed and network stability under ablations and perturbations, and allows RNNs to generalize better to tasks beyond their training regime. This curriculum also significantly improves training of GRUs and LSTMs for large-N tasks. Our results suggest that adapting timescales to task requirements via recurrent interactions allows learning more complex objectives and improves the RNN's performance.
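    The key ingredient is a leaky RNN whose per-neuron timescale τ is trained jointly with the recurrent weights. The PyTorch sketch below uses the standard leaky-integration update; the layer sizes, the softplus parameterization of τ, and the N-parity encoding are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class LeakyRNN(nn.Module):
    """Leaky RNN whose single-neuron timescales tau are trained jointly
    with the recurrent weights (tau >= 1 enforced via softplus + 1)."""
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_hidden)
        self.w_rec = nn.Linear(n_hidden, n_hidden, bias=False)
        self.w_out = nn.Linear(n_hidden, n_out)
        self.tau_raw = nn.Parameter(torch.zeros(n_hidden))  # trainable timescale

    def forward(self, x):                        # x: (time, batch, n_in)
        tau = 1.0 + nn.functional.softplus(self.tau_raw)
        alpha = 1.0 / tau                        # per-neuron integration rate
        h = x.new_zeros(x.shape[1], self.tau_raw.shape[0])
        for x_t in x:                            # leaky-integration update
            h = (1 - alpha) * h + alpha * torch.tanh(self.w_in(x_t) + self.w_rec(h))
        return self.w_out(h)                     # readout after the last step

# N-parity (illustrative): classify the parity of the last N binary inputs.
N, T, B = 5, 50, 64
x = torch.randint(0, 2, (T, B, 1)).float()
y = (x[-N:].sum(dim=0).squeeze(-1) % 2).long()   # parity of the final N bits
model = LeakyRNN(1, 64, 2)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                                  # gradients flow to weights and tau
```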

    The emergence of integrated information, complexity, and 'consciousness' at criticality

    © 2020 by the authors. Integrated Information Theory (IIT) posits that integrated information (Φ) represents the quantity of a conscious experience. Here, the generalized Ising model was used to calculate Φ as a function of temperature in toy models of fully connected neural networks. A Monte Carlo simulation was run on 159 normalized, random, positively weighted networks analogous to small five-node excitatory neural network motifs. The integrated information generated by this sample of small Ising models was measured across the models' parameter spaces. It was observed that integrated information, as an order parameter, underwent a phase transition at the critical point of the model. This critical point was demarcated by the peak of the generalized susceptibility (or variance in configuration due to temperature) of integrated information. At this critical point, integrated information was maximally receptive and responsive to perturbations of its own states. The results of this study provide evidence that Φ can capture integrated information in an empirical dataset and displays critical behavior, acting as an order parameter of the generalized Ising model.
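    The criticality criterion named above, the peak of the generalized susceptibility of integrated information, is easy to state in code. In the sketch below, phi_samples is a hypothetical stand-in for the (involved) IIT computation of Φ on configurations sampled at each temperature; only the peak-finding logic is shown:

```python
import numpy as np

def generalized_susceptibility(phi, T):
    """chi(T) = (<phi^2> - <phi>^2) / T: the variance of the order
    parameter (here, integrated information) per unit temperature."""
    return (np.mean(phi**2) - np.mean(phi)**2) / T

def locate_critical_temperature(phi_samples, temps):
    """phi_samples(T) -> array of Phi values sampled at temperature T
    (hypothetical stand-in for the full IIT computation)."""
    chi = [generalized_susceptibility(phi_samples(T), T) for T in temps]
    return temps[int(np.argmax(chi))]            # Tc sits at the chi peak
```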

    Role of Dimensionality in Predicting the Spontaneous Behavior of the Brain Using the Classical Ising Model and the Ising Model Implemented on a Structural Connectome

    © Pubuditha M. Abeyasinghe et al. 2018. There is accumulating evidence that spontaneous fluctuations of the brain are sustained by a structural architecture of axonal fiber bundles. Various models have been used to investigate this structure-function relationship. In this work, we implemented the Ising model using the number of fibers between each pair of brain regions as input. The output of Ising model simulations on a structural connectome was then compared with empirical functional connectivity data. A simpler two-dimensional classical Ising model was used as the baseline model for comparison purposes. Thermodynamic properties, such as the magnetic susceptibility and the specific heat, illustrated a phase transition from an ordered phase to a disordered phase at the critical temperature. Despite the differences between the two models, the lattice Ising model and the Ising model implemented on a structural connectome (the generalized Ising model) exhibited similar patterns of global properties. To study the behavior of the generalized Ising model around criticality, the dimensionality and critical exponents were calculated for the first time by introducing a new concept of distance based on structural connectivity. The same value of the dimensionality, within the fitting error, was found for both models, suggesting that they behave similarly around criticality.
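    Given spin configurations sampled from the generalized Ising model (with couplings J built from, e.g., normalized fiber counts), the two thermodynamic observables named above can be estimated as variances, as in this sketch; the normalization conventions are assumptions:

```python
import numpy as np

def thermodynamic_observables(J, samples, T):
    """Magnetic susceptibility and specific heat of a generalized Ising
    model whose couplings J could be normalized fiber counts between
    brain regions; samples has shape (n_samples, n), spins in {-1, +1}."""
    n = J.shape[0]
    m = np.abs(samples.mean(axis=1))                          # magnetization
    E = -0.5 * np.einsum('si,ij,sj->s', samples, J, samples)  # energy (J symmetric)
    chi = n * (np.mean(m**2) - np.mean(m)**2) / T             # susceptibility
    C = (np.mean(E**2) - np.mean(E)**2) / T**2                # specific heat
    return chi, C
```

    Sweeping T and locating the common peak of the susceptibility and the specific heat reproduces the order-disorder transition described above.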

    The dynamical regime and its importance for evolvability, task performance and generalization

    It has long been hypothesized that operating close to the critical state is beneficial for natural and artificial systems. We test this hypothesis by evolving foraging agents controlled by neural networks that can change the system's dynamical regime throughout evolution. Surprisingly, we find that all populations, regardless of their initial regime, evolve to be subcritical in simple tasks, and that even strongly subcritical populations can reach comparable performance. We hypothesize that the moderately subcritical regime combines the generalizability and adaptability brought by closeness to criticality with the stability of dynamics characteristic of subcritical systems. Through a resilience analysis, we find that initially critical agents maintain their fitness level even under environmental changes and degrade slowly with increasing perturbation strength. In contrast, subcritical agents originally evolved to the same fitness were often rendered utterly inadequate and degraded faster. We conclude that although the subcritical regime is preferable for a simple task, the optimal deviation from criticality depends on task difficulty: for harder tasks, agents evolve closer to criticality. Furthermore, subcritical populations cannot find the path to decrease their distance to criticality. In summary, our study suggests that initializing models near criticality is important for finding an optimal and flexible solution. (Comment: 8 pages, 7 figures, Artificial Life Conference 202)
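    A resilience analysis of the kind described above can be sketched as a perturbation sweep: add Gaussian noise of increasing strength to an evolved genome and record how fitness degrades. The genome encoding and fitness_fn below are placeholders for the agent-evaluation routine, not the paper's code:

```python
import numpy as np

def resilience_curve(genome, fitness_fn, strengths, n_trials=20, rng=None):
    """Mean fitness of an evolved genome under Gaussian perturbations of
    increasing strength; graceful vs. catastrophic degradation shows up
    in the shape of the returned curve. fitness_fn is a placeholder for
    the agent-evaluation routine (foraging performance, lifespan, ...)."""
    rng = rng or np.random.default_rng()
    curve = []
    for sigma in strengths:
        trials = [fitness_fn(genome + rng.normal(0, sigma, genome.shape))
                  for _ in range(n_trials)]
        curve.append(np.mean(trials))
    return np.array(curve)
```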